Medical Science and Smoking Bans


First, the scientific evidence just isn’t there. I am well aware of what the Surgeon General’s report says. The problem is that it relies mostly on the same studies that were in the EPA report that was thrown out of court on July 17, 1998. I will not send the whole report, but a link is provided here: http://www.forces.org/evidence/epafraud/files/osteen.htm Yes, the case was overturned, but strictly on a technicality: because it was a report and not policy, the courts did not have jurisdiction. The Congressional Research Service looked at the same studies and published a report on them on November 14, 1995: http://www.forces.org/evidence/files/crs11-95.htm

Excerpt from the report:

It is clear that misclassification and recall bias plague ETS epidemiology studies. It is also clear from the simulations that modest, possible misclassification and recall bias rates can change the measured relative risk results, possibly in dramatic ways. Aside from smoking misclassification, however, attempts to correct for them have not taken place because there is currently no information available on how to carry out such corrections. It is possible that more research on the general question of misclassification will reduce the uncertainty now present in these ETS results, but such research will be difficult to perform because its methods, too, appear to be subject to considerable uncertainty.
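To see why modest misclassification matters, here is a toy calculation (mine, with invented rates, not numbers from the CRS report or any study): if even a few actual smokers report themselves as nonsmokers, and do so slightly more often in the ETS-exposed group, an apparent relative risk appears where the true effect is zero.

```python
# Illustrative only: how smoker misclassification can manufacture an ETS relative risk.
# All rates below are hypothetical, chosen for arithmetic clarity.

NONSMOKER_RATE = 10 / 100_000   # assumed lung-cancer incidence among true nonsmokers
SMOKER_RATE = 100 / 100_000     # assumed incidence among smokers (about 10x, for illustration)

def apparent_rate(fraction_hidden_smokers):
    """Observed incidence in a 'nonsmoker' group contaminated with hidden smokers."""
    f = fraction_hidden_smokers
    return (1 - f) * NONSMOKER_RATE + f * SMOKER_RATE

# Suppose spouses of smokers (the 'ETS-exposed' group) hide their own smoking
# slightly more often than spouses of nonsmokers.
exposed = apparent_rate(0.03)    # 3% hidden smokers among the exposed
unexposed = apparent_rate(0.01)  # 1% hidden smokers among the unexposed

rr = exposed / unexposed
print(f"Apparent relative risk with zero true ETS effect: {rr:.2f}")
```

With these invented rates the spurious relative risk comes out in the same range as many published ETS results, which is the CRS report’s point about how sensitive these studies are to misclassification.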

OSHA held numerous workshops on the issue because of NIOSH’s recommendations, but even those recommendations had the same problems.

NIOSH recognizes that these recent epidemiologic studies have several shortcomings: lack of objective measures for characterizing and quantifying exposures, failure to adjust for all confounding variables, potential misclassification of exsmokers as nonsmokers, unavailability of comparison groups that have not been exposed to ETS, and low statistical power.

After more than ten years of mulling over the scientific evidence, OSHA finally decided that the evidence just wasn’t there. Here is its conclusion:

Environmental Tobacco Smoke (ETS)

Because the organic material in tobacco doesn’t burn completely, cigarette smoke contains more than 4,700 chemical compounds. Although OSHA has no regulation that addresses tobacco smoke as a whole, 29 CFR 1910.1000 Air contaminants, limits employee exposure to several of the main chemical components found in tobacco smoke. In normal situations, exposures would not exceed these permissible exposure limits (PELs), and, as a matter of prosecutorial discretion, OSHA will not apply the General Duty Clause to ETS.
After OSHA came out with this policy, ASH, one of the biggest anti-smoking lobbies, filed suit against OSHA. They will not tell you this, but they dropped the suit because the courts will not touch a case unless the risk ratios are higher than 2, a bar the vast majority of the studies don’t even come close to clearing. Here are their words, not mine:

“Action on Smoking and Health (ASH) has agreed to dismiss its law suit against the Occupational Safety and Health Administration [OSHA] to avoid serious harm to the nonsmokers’ rights movement from an adverse action OSHA had threatened to take if forced by the law suit to do so.”

Here is the proof on risk ratios.

Reference Manual on Scientific Evidence

http://www.fjc.gov/public/pdf.nsf/lookup/sciman00.pdf/$file/sciman00.pdf

The activists keep claiming the evidence is clear that there is no safe level of secondhand smoke. Well, since they use the term evidence, let’s see how the law views their evidence. For one thing, both the faked EPA report and the Surgeon General’s report were not studies but meta-analyses of cherry-picked studies, so let’s see what the courts have to say about that.

From the Reference Manual on Scientific Evidence

Page 389

Much has been written about meta-analysis recently, and some experts consider the problems of meta-analysis to outweigh the benefits at the present time. For example, Bailar has written the following:

[P]roblems have been so frequent and so deep, and overstatements of the strength of conclusions so extreme, that one might well conclude there is something seriously and fundamentally wrong with the method. For the present . . . I still prefer the thoughtful, old-fashioned review of the literature by a knowledgeable expert who explains and defends the judgments that are presented. We have not yet reached a stage where these judgments can be passed on, even in part, to a formalized process such as meta-analysis.

John C. Bailar III, Assessing Assessments, 277 Science 528, 529 (1997) (reviewing Morton Hunt, How Science Takes Stock (1997)); see also Point/Counterpoint: Meta-analysis of Observational Studies, 140 Am. J. Epidemiology 770 (1994).

128. See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 945 & n.6 (3d Cir. 1990) (“Epidemiological studies do not provide direct evidence that a particular plaintiff was injured by exposure to a substance.”); Smith v. Ortho Pharm. Corp., 770 F. Supp. 1561, 1577 (N.D. Ga. 1991); Grassis v. Johns-Manville Corp., 591 A.2d 671, 675 (N.J. Super. Ct. App. Div. 1991); Michael Dore, A Commentary on the Use of Epidemiological Evidence in Demonstrating Cause-in-Fact, 7 Harv. Envtl. L. Rev. 429, 436 (1983).

Now, this was written by a well-respected professor, John C. Bailar III, and in essence he is saying that meta-analysis of observational studies is unreliable; all of the studies in both reports fit that category.
Here are some of his credentials.
RESEARCH INTERESTS
* Trends in cancer
* Assessing health risks (such as new chemicals)
* Misconduct in science
OTHER ACTIVITIES
* Editorial Board, New England Journal of Medicine
* Board (Chair), National Institute of Statistical Science

Now, the anti-smoking activists will tell you that RRs (relative risks) less than 2 are perfectly acceptable if they are repeated often enough. They will tell you that the number 2 was picked by the tobacco companies to call the ETS studies junk science. OK, again from the Reference Manual on Scientific Evidence:

Page 392

The threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0. Recall that a relative risk of 1.0 means that the agent has no effect on the incidence of disease. When the relative risk reaches 2.0, the agent is responsible for an equal number of cases of disease as all other background causes. Thus, a relative risk of 2.0 (with certain qualifications noted below) implies a 50% likelihood that an exposed individual’s disease was caused by the agent. A relative risk greater than 2.0 would permit an inference that an individual plaintiff’s disease was more likely than not caused by the implicated agent.139 A substantial number of courts in a variety of toxic substances cases have accepted this reasoning.
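The arithmetic behind that passage can be sketched in a few lines. Under the manual’s standard reasoning, the probability that the agent caused an exposed individual’s disease is (RR − 1)/RR, which crosses 50% exactly at RR = 2.0. A minimal illustration:

```python
def attributable_probability(rr):
    """Probability of causation implied by a relative risk,
    per the standard (RR - 1) / RR formula used in toxic-tort reasoning."""
    if rr < 1.0:
        raise ValueError("RR below 1.0 implies no excess risk to attribute")
    return (rr - 1.0) / rr

# Typical ETS study results fall well below the 2.0 threshold.
for rr in (1.2, 1.5, 2.0, 3.0):
    print(f"RR = {rr}: {attributable_probability(rr):.0%} likelihood "
          "the agent caused an exposed individual's disease")
```

At the RRs most ETS studies report (around 1.2 to 1.3), this formula puts the likelihood of causation well under the more-likely-than-not standard.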

So the courts won’t even touch anything less than a 2. Should we be passing laws based on scientific evidence that couldn’t even stand up in a court of law? Especially since epidemiologists can’t even agree on RRs this low. From an award-winning article in Science, “Epidemiology Faces Its Limits”:

“If it’s a 1.5 relative risk, and it’s only one study and even a very good one, you scratch your chin and say maybe.” Some epidemiologists say that an association with an increased risk of tens of percent might be believed if it shows up consistently in many different studies.

That’s the rationale for meta-analysis — a technique for combining many ambiguous studies to see whether they tend in the same direction (Science, 3 August 1990, p. 476).

But when Science asked epidemiologists to identify weak associations that are now considered convincing because they show up repeatedly, opinions were divided — consistently.

Take the question of alcohol and breast cancer. More than 50 studies have been done, and more than 30 have reported that women who drink alcohol have a 50% increased risk of breast cancer. Willett, whose Nurse’s Health Study was among those that showed a positive association, calls it “highly probable” that alcohol increases the risk of breast cancer. Among other compelling factors, he says, the finding has been “reproduced in many countries with many investigators controlling for lots of confounding variables, and the association keeps coming up.” But Greenland isn’t so sure. “I’d bet right now there isn’t a consensus. I do know just from talking to people that some hold it’s a risk factor and others deny it.” Another Boston-based epidemiologist, who prefers to remain anonymous, says nobody is convinced of the breast cancer-alcohol connection “except Walt Willett.”

Another example is long-term oral contraceptive use and breast cancer, a link that has been studied for a quarter of a century. Thomas of the Fred Hutchinson Cancer Research Center says he did a meta-analysis in 1991 and found a dozen studies showing a believable association in younger women who were long-time users of oral contraceptives. “The bottom line,” he says, “is it’s taken us over 20 years of studies before some consistency starts to emerge. Now it’s fairly clear there’s a modest risk.” But Noel Weiss of the University of Washington says he did a similar review of the data that left him unconvinced. “We don’t know yet,” he says. “There is a small increased risk associated [with oral contraceptive use], but what that represents is unclear.” Mary Charleson, a Cornell Medical Center epidemiologist, calls the association “questionable.” Marcia Angell calls it “still controversial.”

Consistency has a catch, after all, explains David Sackett of Oxford University: It is persuasive only if the studies use different architectures, methodologies, and subject groups and still come up with the same results. If the studies have the same design and “if there’s an inherent bias,” he explains, “it wouldn’t make any difference how many times it’s replicated. Bias times 12 is still bias.”

What’s more, the epidemiologists interviewed by Science point out that an apparently consistent body of published reports showing a positive association between a risk factor and a disease may leave out other, negative findings that never saw the light of day. “Authors and investigators are worried that there’s a bias against negative studies,” and that they will not be able to get them published in the better journals, if at all, says Angell of the NEJM. “And so they’ll try very hard to convert what is essentially a negative study into a positive study by hanging on to very, very small risks or seizing on one positive aspect of a study that is by and large negative.” Or, as one National Institute of Environmental Health Sciences researcher puts it, asking for anonymity, “Investigators who find an effect get support, and investigators who don’t find an effect don’t get support. When times are tough it becomes extremely difficult for investigators to be objective.”

When asked why they so willingly publish inconclusive research, epidemiologists say they have an obligation to make the data public and justify the years of work. They also argue that if the link is real, the public health effect may be so dramatic that it would be irresponsible not to publish it. The University of North Carolina’s Savitz, for instance, who recently claimed a possible link between EMF exposure and a tens of percent increase in the risk of breast cancer, says: “This is minute … But you could make an argument that even if this evidence is 1000-fold less than for [an EMF-leukemia link], it is still more important, because the disease is 1000-fold more prevalent.”

One of the more pervasive arguments for publishing weak effects, Rothman adds, is that any real effect may be stronger than the reported one. Any mismeasurement of exposure, so the argument goes, will only serve to reduce the observed size of the association. Once researchers learn how to measure exposure correctly, in other words, the actual association will turn out to be bigger — and thus more critical to public health. That was the case in studies of steelworkers and lung cancer decades ago, says Robins. Early studies saw only a weak association, but once researchers homed in on coke-oven workers, the group most exposed to the carcinogens, the relative risk shot up. None of the epidemiologists who spoke to Science could recall any more recent parallels, however.

An unholy alliance

There would be few drawbacks to publishing weak, uncertain associations if epidemiologists operated in a vacuum, wrote Brian MacMahon, professor emeritus of epidemiology at Harvard, in an April 1994 editorial in the Journal of the National Cancer Institute. But they do not, he said. “And, however cautiously the investigator may report his conclusions and stress the need for further evaluation,” he added, “much of the press will pay little heed to such cautions … By the time the information reaches the public mind, via print or screen, the tentative suggestion is likely to be interpreted as a fact.” This is what one epidemiologist calls the “unholy alliance” between epidemiology, the journals, and the lay press.

http://nasw.org/awards/1996/96Taubesarticle.htm

Speaking of publication bias, a study in the BMJ, “Reanalysis of epidemiological evidence on lung cancer and passive smoking,” showed that such bias does exist.

http://banthebanwisconsin.com/Documents/reanalysis.pdf

Conclusions: A modest degree of publication bias leads to a substantial reduction in the relative risk and to a weaker level of significance, suggesting that the published estimate of the increased risk of lung cancer associated with environmental tobacco smoke needs to be interpreted with caution.
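The mechanism the BMJ authors describe is easy to demonstrate with a toy fixed-effect pooling. The study values below are invented for illustration (they are not the paper’s data): ten small studies whose true effect is null, of which only the “positive” ones get published.

```python
import math

def pooled_rr(rrs, weights):
    """Fixed-effect pooled relative risk: a weighted average of log-RRs.
    Weights would normally be inverse variances; equal sizes here for simplicity."""
    log_pool = sum(w * math.log(r) for r, w in zip(rrs, weights)) / sum(weights)
    return math.exp(log_pool)

# Hypothetical field of ten studies whose true effect is null (RR = 1.0),
# scattered around 1.0 by sampling noise.
all_rrs = [0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20, 1.30, 1.40]
all_weights = [100] * len(all_rrs)

# Publication bias: only the studies with RR > 1 make it into journals.
published = [(r, w) for r, w in zip(all_rrs, all_weights) if r > 1.0]
pub_rrs, pub_weights = zip(*published)

print(f"Pooled RR, all studies:    {pooled_rr(all_rrs, all_weights):.2f}")
print(f"Pooled RR, published only: {pooled_rr(pub_rrs, pub_weights):.2f}")
```

Pooling everything lands near the true null, while pooling only the published half produces a clear “effect,” which is exactly the caution the BMJ conclusion urges.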

Meta-analysis was designed for drug studies, where bias and confounding variables can be kept to a minimum, which is not the case with the studies on ETS. It also allows the authors to skew the results to fit their agenda, and it should be viewed with a jaundiced eye.

Beware of Meta-analyses Bearing False Gifts.

The interpretation of a meta-analysis is potentially subject to an author’s bias by what inclusion and exclusion criteria are selected, the type of statistical evaluation performed, decisions made on how to deal with disparities between the trials, and how the subsequent results are presented.

Whether the conclusions of a meta-analysis are far reaching or limited can be affected by the inherent bias that the author of the meta-analysis brings to the study.

http://www.improvingmedicalstatistics.com/Meta%20Beware%20gifts.htm

Multiple Studies

Meta-analysis is the statistical technique used to pool results from different studies. Originally it was developed for summarizing the results of homogeneous randomized clinical trials, which remains its legitimate application. However, using meta-analysis for pooling the results of diverse observational studies is fraught with irresolvable difficulties. The procedure gives different weights to studies, primarily in relation to their size. However, meta-analysis does not pool the discrete data that originated each result, but only the final results of each study, regardless of whether they are concordant or discordant, credible or not. The procedure does not discriminate according to characteristics of each study, such as its design, data collection, standardizations, biases, confounders, adjustments, statistical procedures, etc. Meta-analysis, therefore, produces only a weighted average of the final numerical results of the studies, but does not standardize, relieve, or control for differential corruptions that may be present in each study. If characteristics other than study size are used in weighing studies (e.g. study quality), those characteristics are likely to be discretionary, judgmental, and conducive to different meta-analysis results when handled by different analysts. Statistical tests of homogeneity for a group of studies do exist, but again they relate only to numerical homogeneity and say nothing about other determinant characteristics.

The Reference Guide to Epidemiology of the Federal Judicial Center’s Reference Manual on Scientific Evidence warns that “[a] final problem with meta-analyses is that they generate a single estimate of risk and may lead to a false sense of security regarding the certainty of the estimate. People often tend to have an inordinate belief in the validity of findings when a single number is attached to them, and many of the difficulties that may arise in conducting a meta-analysis, especially of observational studies like epidemiologic ones, may consequently be overlooked.”

Therefore, with the exception of its use for summarizing homogeneous randomized clinical studies, it should be manifest that meta-analysis can be used as a strategy to contrive meaning from studies that have no apparent meaning.
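That “weighted average of the final numerical results” is concrete enough to sketch. The two studies below are invented for illustration: size-based weights let a single large study dominate the pooled number, and nothing in the formula sees design quality.

```python
import math

# Toy illustration (invented numbers): meta-analysis pools only each study's
# final RR, weighted here by size. Nothing in the formula encodes study quality.
studies = [
    {"name": "small, well-designed study", "rr": 1.00, "n": 200},
    {"name": "large, biased study",        "rr": 1.40, "n": 5000},
]

total_n = sum(s["n"] for s in studies)
pooled = math.exp(sum(s["n"] * math.log(s["rr"]) for s in studies) / total_n)
print(f"Pooled RR: {pooled:.2f}")  # sits close to the large study's value
```

The pooled estimate lands almost on top of the large study’s result; the careful null study barely registers, which is the quality-blindness the excerpt above is describing.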

Indeed, numerical transformations and renditions impart an undeserved sense of accuracy and credibility to a background of vagueness caused by study design deficiencies, asymmetries in data collection, statistical error, biases, confounders, limitations of adjustments and standardizations, prejudice, and more. Tests of statistical significance are equally speculative, being no more than approximate summaries of metaphorical primary data.

As a further cautionary note, the greater the complexity of the statistical analysis in epidemiologic reports, the weaker the data is likely to be. Known as data dredging, epidemiologists sometimes squeeze every conceivable signal out of what is usually a congeries of data. How do epidemiologists approach the inherent fragility of their data? . . . .

EPIDEMIOLOGISTS AND UNCERTAINTY

Epidemiologists react to uncertainty by taking contrasting positions. At one end, the mainstream profession is focusing on the specificity and accuracy of data collection, and on controlling as best as possible for biases and confounders. This concern may be reflected further in cautious, balanced, and truthful representations of epidemiologic uncertainty to the media, the public, and policymakers.

At the opposite end, a long tradition of advocacy views epidemiology as a fungible tool for the advancement of goals aligned with personal, cultural, and political views, most often reflecting a consuming enthusiasm for what the French would call dirigisme. In too many instances advocacy has prevailed over interpretive restraint, leading to confused and quarrelsome debates.

Advocacy positions are typically supported on the grounds that “[d]espite philosophic injunctions concerning inductive inference, criteria have commonly been used to make such inferences. The justification offered has been that the exigencies of public health problems demand action and that despite imperfect knowledge causal inferences must be made.” The circularity of such a justification is manifest when one realizes the large number of exigencies of public health that have been created by epidemiologists on the basis of knowledge that is marginal, if not wholly conjectural.

Undoubtedly advocacy has valid roles, but it should be apparent that its legitimacy is proportional to the factual reliability of what is being sustained. Epidemiologists are divided on this issue. A new “paradigm” of epidemiology has been proposed, one that shows little patience with the scientific method, while being still reluctant to be perceived as nonscientific.

http://www.wlf.org/upload/GoriWP.pdf

As far as the economic impact goes, several states have found enforcement to be quite costly.

OHIO.

TOLEDO (AP) – County health departments are running up steep bills in their efforts to stamp out smoke across Ohio.

A year after the state’s workplace smoking ban went into effect, some county health departments have found enforcement to be too costly. At least a dozen local entities have turned over inspection and violation duties to the Ohio Department of Health.

The Toledo-Lucas County health department alone has spent $40,000 hunting down violators, while banking just $630 in fines. Costs stem from overtime, mileage and other added expenses.

http://www.indianasnewscenter.com/news/local/18326849.html

And how has it affected business?

Most of the fears around the ban have been realized. Revenue at restaurants, especially those that rely heavily on alcohol sales, have dwindled, sometimes fatally. The loss, though, has been distributed unevenly, with some restaurants reporting 80 percent decreases and others witnessing a bump in revenue.

Some businesses have deliberately disregarded the ban and allow smokers to puff away as though nothing has changed — accruing complaints, citations and fines along the way.

Other restaurants have accepted the ban and found creative ways to keep their businesses afloat.

http://www.chroniclet.com/2008/04/27/one-year-later-how-have-area-businesses-fared-since-the-smoking-ban/

And of course the ban advocates hold up New York as a shining example.

CIG BAN? WHAT CIG BAN?

May 27, 2007 — While Mayor Bloomberg tries to make the world safe from greenhouse gases, his cigarette ban is going up in smoke.

Scores of trendy clubs and neighborhood pubs across the five boroughs have become smoking speakeasies, where bartenders and bouncers regularly ignore the prohibition launched in 2003.

The Post spotted scofflaw smokers openly puffing away in a dozen bars and clubs in Manhattan, Brooklyn, Queens and Staten Island during the past few weeks – including celebrity hangouts Bungalow 8, Tenjune, Butter, Marquee, Plumm and Guest House.

http://www.nypost.com/seven/05272007/news/regionalnews/cig_ban__what_cig_ban__regionalnews_angela_montefinise.htm

Surgeon General Trades Integrity for Advocacy

The U.S. Surgeon General recently appeared in a TV interview on PBS’s News Hour on the day his new report on secondhand smoke was released. He stated repeatedly and emphatically that there was no safe level of exposure to secondhand smoke. (This is the “no threshold” theory.) But the very report he was talking about doesn’t support what he was saying. It draws no such conclusion, nor does it provide any data to support such a conclusion. The SG was simply not being honest.

The “no threshold” theory about cancer has never been shown to be true for ANY chemical, much less secondhand smoke. The theory that if something is carcinogenic at high doses it must also be proportionately so at small doses simply does not fit the real world. At least ten elements (including iron and oxygen) are carcinogens at high doses but essential to human life in small doses. And some carcinogens, such as selenium and Vitamin A, are proven anti-carcinogens at low doses. These facts contradict the “no-threshold” theory. Thresholds are a law of nature; the mere title of one treatise says it all:“Environmental Carcinogenesis—The Threshold Principle: A Law of Nature.” The authors, Claus and Bolander, state that the no-threshold theory about any dose being dangerous ignores “all the fundamental principles of cell biology.”

Dr. Elizabeth Miller, former president of the American Association for Research on Cancer, has stated:“Chemical carcinogenesis is a strongly dose-dependent phenomenon.” This is opposite to the claim by smoking ban advocates—including the surgeon general—that it is not dose dependent, that any dose is a health hazard (no threshold.)

The no-threshold theory, when applied to secondhand smoke,“incorporates unsound assumptions that are not valid,” says an article by Drs. Huber (pulmonary specialist), Brockie (cardiologist), and Mahajan (a hospital director of internal medicine and professor of medicine.)

Furthermore, thresholds are known to exist for mainstream tobacco smoke in total as well as for each of the individual carcinogens known to exist in it. It is preposterous to claim, as the SG does, that secondhand smoke—which is more than 100,000 times more dilute than mainstream smoke—has no threshold, even though mainstream smoke does. This turns the dose-response principle of epidemiology on its head and means secondhand smoke can be more dangerous than actual smoking! Ridiculous!
http://www.amlibpub.com/essays/surgeon-general-trades-integrity-for.html

On the Surgeon General’s report, from “Where’s the Consensus on Secondhand Smoke?”:

More than a year has passed since U.S. Surgeon General Richard Carmona said, “The debate is over. The science is clear: Secondhand smoke is not a mere annoyance, but a serious health hazard.”

At the time, Carmona released a seemingly impressive 727-page report on secondhand smoke, the introduction of which claims secondhand smoke killed approximately 50,000 nonsmoking adults and children in 2005.

Carmona’s report stated the new orthodoxy in the anti-smoking establishment: There is a “consensus” on the dangers of secondhand smoke. But did his report actually make the case?

Junk Science and Courtrooms

Understanding Carmona’s report requires familiarity with a different report–the Federal Judicial Center’s 2000 “Reference Manual on Scientific Evidence, Second Edition,” the official guide for judges to understand and rule on science introduced in courtrooms.

According to the manual, nearly all the studies cited in Carmona’s report wouldn’t pass muster in a court of law because they are observational studies, the sample sizes are too small, or the effects they show are too negligible to be reliable.

For example, the Reference Manual states, “the threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0.” Few of the studies Carmona cites found relative risks this large, and most found risks in a range that included 1.0, which means exposure to secondhand smoke had no effect on the incidence of disease. In the world of real science, that’s a knockout blow.

Most of the research Carmona cites was rejected by a federal judge in 1993, when the Environmental Protection Agency (EPA) first tried to classify secondhand smoke as a human carcinogen. The judge said EPA cherry-picked studies to support its position, misrepresented the most important findings, and failed to honor scientific standards. Carmona’s report relies on the same studies and makes the same claims EPA did a decade ago.

Missing Study

Did Carmona and coauthors cherry-pick the data? Absolutely. They ignore the largest and most credible study ever conducted on spouses of smokers, by Enstrom and Kabat, published in the May 12, 2003 issue of the British Medical Journal. The authors found:

“The results do not support a causal relationship between environmental tobacco smoke and tobacco-related mortality. The association between tobacco smoke and coronary heart disease and lung cancer may be considerably weaker than generally believed.”

Carmona mentions the Enstrom study just once, in an appendix listing studies too recent to include in the report. But Enstrom’s study was published four years ago, and Carmona cites more recent studies. In fact, Carmona’s principal “findings” were taken from a 2005 report–not a scientific study, merely another report–from California’s Clean Air Resources Board, mostly citing the very studies the federal judge rejected in 1993.

http://www.heartland.org/Article.cfm?artId=22150

Is it any wonder that cherry-picking was done in the Surgeon General’s report? The chief scientific editor was one of those who worked on the faked EPA report.

Jonathan M. Samet, M.D., and the 1992 EPA Report
One might wonder how omissions, distortions, and exaggerations like those pointed out above could occur in a document as important as a Surgeon General’s Report on ETS. To better understand this phenomena one must realize that Samet has dealt with the ETS issue in this manner for many years. In particular, he played a major role in the epidemiologic analysis for the December 1992 report on Health Effects of Passive Smoking: Lung Cancer and Other Disorders: The Report of the United States Environmental Protection Agency [120]. This EPA report classified ETS as a Group A human carcinogen, which causes about 3,000 lung cancer deaths per year in the U.S. The findings from this report were used in the Broin v. Philip Morris litigation described above.

The epidemiologic methodology and conclusions of the EPA report have been severely criticized. One of the harshest critiques is the 92-page Decision issued by Federal Judge William L. Osteen on July 17, 1998, which overturned the report in the U.S. District Court [121]. For instance, in his conclusion Judge Osteen wrote: “In conducting the Assessment, EPA deemed it biologically plausible that ETS was a carcinogen. EPA’s theory was premised on the similarities between MS [mainstream smoke], SS [sidestream smoke], and ETS. In other chapters, the Agency used MS and ETS dissimilarities to justify methodology. Recognizing problems, EPA attempted to confirm the theory with epidemiologic studies. After choosing a portion of the studies, EPA did not find a statistically significant association. EPA then claimed the bioplausibility theory, renominated the a priori hypothesis, justified a more lenient methodology. With a new methodology, EPA demonstrated from the 88 selected studies a very low relative risk for lung cancer based on ETS exposure. Based on its original theory and the weak evidence of association, EPA concluded the evidence showed a causal relationship between cancer and ETS. The administrative record contains glaring deficiencies….” From Defending legitimate epidemiologic research: combating Lysenko pseudoscience linked later.

Lastly, I am sure this study has been brought up a lot: Enstrom/Kabat.

It is the largest, most comprehensive study ever done. It was started by the ACS, but when it wasn’t producing numbers that fit their agenda, they abandoned it.

Conclusions: The results do not support a causal relation between environmental tobacco smoke and tobacco related mortality, although they do not rule out a small effect. The association between exposure to environmental tobacco smoke and coronary heart disease and lung cancer may be considerably weaker than generally believed.

http://www.bmj.com/cgi/reprint/326/7398/1057.pdf

Now, those in tobacco control will try to tell you that this was a Big Tobacco study, which is 100% false.

You should read Enstrom’s rebuttal:

Defending legitimate epidemiologic research: combating Lysenko pseudoscience

It is worth repeating the allegations in the Kessler decision, first to point out that they are the same false and misleading claims about the Enstrom/Kabat study by the ACS, Samet, Glantz, and others that are described above, and second to show how obviously incorrect they are. The Enstrom/Kabat study was not “CIAR-funded and managed” and was not “funded and managed by the tobacco industry through CIAR and Philip Morris.” Although the study was partially funded by CIAR, it was not managed by either CIAR or Philip Morris. Indeed, CIAR assigned its entire award for the study to UCLA in 1999 just before CIAR was dissolved as a condition of the Master Settlement Agreement [105]. CIAR did not even exist when my study was being completed. The study was conducted and published without any influence from the tobacco industry.

It is also interesting to note that the Senior Scientific Editor of the Surgeon General's Report was a major player in the faked 1992 EPA report.

Jonathan M. Samet, M.D., and the 1992 EPA Report
One might wonder how omissions, distortions, and exaggerations like those pointed out above could occur in a document as important as a Surgeon General's Report on ETS. To better understand this phenomenon, one must realize that Samet has dealt with the ETS issue in this manner for many years. In particular, he played a major role in the epidemiologic analysis for the December 1992 report on Health Effects of Passive Smoking: Lung Cancer and Other Disorders: The Report of the United States Environmental Protection Agency [120]. This EPA report classified ETS as a Group A human carcinogen, which causes about 3,000 lung cancer deaths per year in the U.S. The findings from this report were used in the Broin v. Philip Morris litigation described above.

The epidemiologic methodology and conclusions of the EPA report have been severely criticized. One of the harshest critiques is the 92-page Decision issued by Federal Judge William L. Osteen on July 17, 1998, which overturned the report in the U.S. District Court [121]. For instance, in his conclusion Judge Osteen wrote: “In conducting the Assessment, EPA deemed it biologically plausible that ETS was a carcinogen. EPA’s theory was premised on the similarities between MS [mainstream smoke], SS [sidestream smoke], and ETS. In other chapters, the Agency used MS and ETS dissimilarities to justify methodology. Recognizing problems, EPA attempted to confirm the theory with epidemiologic studies. After choosing a portion of the studies, EPA did not find a statistically significant association. EPA then claimed the bioplausibility theory, renominated the a priori hypothesis, justified a more lenient methodology. With a new methodology, EPA demonstrated from the 88 selected studies a very low relative risk for lung cancer based on ETS exposure. Based on its original theory and the weak evidence of association, EPA concluded the evidence showed a causal relationship between cancer and ETS. The administrative record contains glaring deficiencies….”

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2164936

People like Stanton Glantz attacked the study before it was even released, and did everything in their power to keep it from being released. Why??? Because the ACS also had the database and knew what the results were going to be.

I wholeheartedly recommend you read it. It is an eye-opener.

Speaking of Stanton Glantz, he also worked on the faked EPA report and was one of the authors of the Surgeon General's Report, and here is his philosophy on doing research.

If Glantz’s research is sometimes questioned, if not indeed questionable, he has only himself to blame. Here’s his policy statement on how he approaches research:

“…that’s the question that I have applied to my research relating to tobacco: If this comes out the way I think, will it make a difference [toward achieving the goal]. And if the answer is yes, then we do it, and if the answer is I don’t know, then we don’t bother. Okay? And that’s the criteria.”
–  Written Transcript Of 3-Day Conference Called “Revolt Against Tobacco,” L.A., 1992

It’s the criteria for advocacy, all right. Just not for objective science.

Enstrom isn't the only legitimate scientist who has complained about interference and attacks by anti-smoking activists.

We did indeed learn of several cases where the organized interests abusing epidemiology were industry or government, and communicated with some of the researchers involved. But these stories had already appeared in the literature (there was one exception where the researchers were not yet ready to go public). It turned out that all of the submissions we received about stories that had not previously appeared in the literature involved attacks on epidemiologists or epidemiology by anti-tobacco activists. Two of those submissions (both of which relate to the effects of passive exposure to cigarette smoke (a.k.a. “second-hand smoke” or environmental tobacco smoke (ETS)) appear with this commentary.

The entire article is here: http://www.epi-perspectives.com/content/pdf/1742-5573-4-13.pdf

I think the best proof against the so-called scientific evidence is the fact that no one has successfully sued the tobacco companies for ETS exposure. If the evidence is as overwhelming as the anti-smoker activists claim, don't you think there would have been at least one successful lawsuit???

I sincerely hope you look more closely at these studies and see them for the wishcraft that they are. Again I say: is it good government policy to write laws based on science that couldn't hold up in a court of law?????

About Marshall Keith

Broadcast Engineer, Scuba Diver, Photographer, Fisherman, Hunter, Libertarian

1 Response to Medical Science and Smoking Bans

  1. This says everything I have found since I started my research in 1998 and says it better than I could. I found these exact things to be true. It has never been about health. It has been always about Profit. Smoking Bans are the most profitable Marketing Plan ever devised.
